Harnessing the Power of Interaction
Abstract
Inspired by Peter Wegner’s analysis of the paradigm shift from algorithms to interaction, and by his conclusion that “proving correctness of interactive systems is not merely difficult but impossible,” we outline some consequences that we see as inevitable for the specification and design of interactive systems. The technical feasibility of the proposed solutions has been justified by theoretical results and practical experimentation. Although many of the claims have been widely accepted in academic research, and none of them is new, their practical significance has not yet been generally recognized in software engineering. In particular, most proponents of object-oriented specification and design seem to be unaware of them.

1 From Algorithms to Interaction

Interaction is an essential characteristic of modern software and of complex artifacts constructed as embedded systems. As a consequence, the essence of computing is no longer in the output values computed for given input data, but in the temporal behaviors that a system is capable of generating in cooperation with its environment. In software design practice the new interactive paradigm is reflected in object-orientation and in the use of scenarios and use cases, embedded in novel approaches like UML [23], for instance. The algorithmic paradigm, which treats programs as input/output transformations, has clearly become insufficient, but is still commonly used as the basis for understanding software. For instance, most formalizations of object-oriented concepts are still essentially algorithmic in nature.

In his CACM article “Why interaction is more powerful than algorithms,” Peter Wegner analyzes the theoretical significance of this paradigm shift, which “captures the technology shift from mainframes to workstations and networks, from number-crunching to embedded systems and graphical user interfaces, and from procedure-oriented to object-based and distributed programming” [20].
Defining “interaction machines” as a generalization of Turing machines, he notes an essential increase in the associated complexity of reasoning, concluding that “proving correctness of interactive systems is not merely difficult but impossible.” The purpose of this paper is to analyze the significance that this complexity has for the engineering of interactive systems. The main conclusion is that, since interactive systems are not natural phenomena but artifacts, it is possible to avoid the complexity that would be inherent in dealing with arbitrary interactive systems. This requires, however, integration of the theoretical foundations of interaction into the design methods that are used in practice, and effective use of proper abstractions. While “the irreducibility of interaction to algorithms enhances the intellectual legitimacy of computer science as a discipline distinct from mathematics” [20], we believe that a similar enhancement is then due in the intellectual basis of software engineering.

The structure of the rest of the paper is as follows. The fundamental principles in the modeling of interactive systems are discussed in Section 2. Section 3 is the core of the paper, where we outline how the complexity of interactive systems can be mastered by effective use of abstractions during the design process. The role of components and object-orientation is revisited in Section 4, and the paper ends with some concluding remarks in Section 5.

2 Modeling of Interactive Systems

Before addressing how the inherent complexity of interactive systems can be mastered, we give a brief analysis of some basic issues in their modeling. In particular, we discuss the inadequacy of such basic principles as encapsulated methods and the open-system paradigm, which are usually accepted without criticism.

2.1 Interaction and Encapsulated Methods

To emphasize interaction, the word “behavior” is used in object-oriented terminology, for instance.
The algorithmic paradigm is, however, still evident in defining “behavior” as the collection of methods – input-output transformations – available for an object. It is a most elementary observation that temporal and cooperative aspects are essential in all interactive behaviors. As “it takes two to tango,” an isolated object generates no behaviors.

Claim 1 Temporal and collective aspects are at the very heart of interactive behavior.

An inherent characteristic of the algorithmic paradigm is that software entities are described in isolation from the contexts where they are (intended to be) used. Whether this is useful or not depends on what one is interested in. Obviously, such descriptions are needed in the implementation of reusable components, for instance. From this one should not, however, draw the conclusion that the same would be true for earlier stages of software development. Independent description of components requires detailed specification of their (static) interfaces, which is irrelevant in specifying what these components are intended to do (dynamically) in cooperation. Therefore, from the viewpoint of system specification, traditional specification of objects in terms of attributes and encapsulated methods has two severe shortcomings: a strong bias towards eventual implementation, and failure to describe the intended dynamic behaviors [11]. The awkwardness of such an approach to system specification would be evident in any attempt to describe tango by first defining the encapsulated methods that the two partners would need.

Claim 2 By requiring detailed interface definitions, the encapsulation of methods in objects introduces unnecessary complexity in the specification and modeling of interactive behaviors.

Some recognition of these problems is reflected in current work where dynamic behaviors are expressed as scenarios or use cases.
The fundamental importance of dynamic behaviors is, however, not fully understood when they are given only an auxiliary or informal role, as is usually the case.

Claim 3 Dynamic behaviors are more fundamental for rigorous reasoning about interactive systems than methods and interfaces, and should be treated as such.

2.2 Interaction and Concurrency

Interaction and concurrency are intimately tied together. On one hand, coordination of concurrency requires interaction. On the other hand, the existence of concurrent parties is always inherent in interaction.

Claim 4 Interaction and concurrency are inseparable.

The notion of methods is essentially based on a sequential model of computing. Although degenerate temporal behaviors are possible, where only one of the cooperating parties is active at a time while all others are passively waiting, it is unreasonable to adopt such a sequential starting point in dealing with inherently interactive behaviors. This explains why conventional object-orientation leads to complications with concurrent objects. It is true that computer architectures – and hence also implementations of software – are basically sequential, but such implementation-oriented aspects should not constrain early stages of system development.

Claim 5 Adding concurrency control mechanisms to an essentially sequential model of computing is an unnecessarily complex way of dealing with interactive behaviors.

Starting with an essentially sequential model of computing not only results in unnecessary complexity; it also leads to ignoring some essential properties. A prime example of this is liveness properties.¹ In sequential computing one only needs the simple implicit liveness assumption that a program never halts prematurely.
Therefore, concepts derived from sequential models hide liveness properties, and all interest is then focused on safety properties.² However, responsiveness in interactive systems is based on nontrivial liveness assumptions, which have to be made explicit.

¹ Informally, a liveness property is a property of the form “something good will eventually happen.”
² Informally, a safety property is of the form “nothing bad ever happens.” Every property of temporal behaviors is a conjunction of a safety property and a liveness property.

Claim 6 Basically sequential models of computing ignore liveness properties, which are essential in interactive behaviors.

By definition, algorithms are deterministic. However, lack and deliberate hiding of information, as well as postponement of design decisions, inevitably lead to nondeterministic modeling. In dealing with concurrency, unknown relative speeds and communication delays make nondeterminism a necessity even at the programming-language level.

Claim 7 Nondeterminism is inherent in the modeling of interactive behaviors.

2.3 Closed-System Principle

The development of theories is guided by the questions asked. Theories of computing have usually asked what programs, procedures, objects, etc., are as mathematical entities. Since the isolation of program units from their environments is inherent already in these questions, the answers cannot do otherwise. Focusing on interactive behaviors requires asking what interactions are, and what the entities are that generate them. Since open systems do not generate behaviors by themselves, we conclude:

Claim 8 In dealing with interactive behaviors, the basic entities to be described are not isolated program units, but closed systems that also include the environments in which such units are used.

A common argument against closed-system specifications is that, since an implementation has no control over its environment, it makes no sense to try to constrain it in a specification.
However, every open-system specification also makes some assumptions about its environment. In particular, every specification of an embedded system “constrains” its environment in this sense, because it can never specify all interactions that are physically possible. In the light of this trivial observation, closed-system specifications only extend the possibilities to express such assumptions. They also make the situation symmetric, so that similar properties can be specified/assumed for all partners of interaction.

Another argument against including the environment in a specification is that one cannot realistically model human behavior, for instance. While true, this does not exclude nondeterministic user models that are conservative approximations of what the users may do. In any case, all interesting properties of interactive systems depend on some assumptions of this kind, and the feasibility of such assumptions can be analyzed only if they are made explicit.

In addition to being intuitively natural, closed-system models are often simpler to deal with than their open counterparts. The liveness properties of two-phase locking in serializable databases provide a good example of this. For database requests it is important that each request is eventually responded to. In two-phase locking this liveness property depends essentially on the fact that the environment will eventually terminate each transaction, which again depends on the system eventually responding to each request. Considering the system and its environment as open components therefore leads to what looks like circular reasoning, whereas no similar problems arise in closed-system modeling.

2.4 Basic Interactions

In object-oriented programming, objects interact through method invocations. Being essentially based on algorithmic thinking, methods are, however, unsuitable as interaction primitives in an inherently concurrent world.
What is missing in algorithmic theories – and is therefore added to them by a variety of implementation-oriented mechanisms – is atomicity of execution: an action is atomic if, once started, it will eventually be completed without interference from other actions. Since atomicity is more important for reasoning than the mechanisms by which it can be implemented, it needs to be recognized as a fundamental abstraction in dealing with interactive systems:

Claim 9 Any reasonable approach to managing interaction must be based on explicitly specified atomic actions.

Basically, interaction between components can take place through shared actions or events (event-based approaches) or shared variables (state-based approaches). In event-based approaches, externally visible events in one component can be synchronized with those in other components, resulting in shared events that are executed cooperatively. The composite system may again either hide an event or make it visible to further potential components. In state-based approaches, each action is assumed to be executed by a unique component, but, in addition to local variables, it may access external variables that are local to other components. Correspondingly, a component may permit other components to access some of its variables.

Taking another look at open and closed systems, one notices that, to some extent, this distinction is in the eye of the beholder. The reason is that an open system can always be identified with the closed system in which the given open system is used nondeterministically in the most general way possible. For state-based models this means implicit inclusion of an environment that accesses and modifies externally accessible variables in an arbitrary fashion allowed by the system. In event-based systems the distinction between open and closed systems is even more subtle, since a component can also execute its visible events alone (i.e., as a closed system), if no other components are provided.
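To make the state-based reading of Claim 9 concrete, here is a minimal sketch (all names are hypothetical, not from the paper): a `transfer` action is executed by one component but reads and writes variables of two participating `Account` components, and a single lock makes the whole guarded action atomic.

```python
import threading

class Account:
    """A component with a local variable `balance` that it
    permits other components to access."""
    def __init__(self, balance):
        self.balance = balance

# One lock models atomicity of actions: once an action starts,
# it completes without interference from other actions.
_action_lock = threading.Lock()

def transfer(src, dst, amount):
    """An atomic action with two participants: its guard
    (sufficient funds) and its body execute as one indivisible step."""
    with _action_lock:
        if src.balance >= amount:       # guard
            src.balance -= amount       # body: modifies variables of
            dst.balance += amount       # both participating components
            return True
        return False                    # guard false: action not taken

a, b = Account(100), Account(0)
transfer(a, b, 30)
print(a.balance, b.balance)  # -> 70 30
```

In the event-based style the same interaction would instead be a shared event, synchronized between the two account components and executed cooperatively; the atomicity requirement is the same in both styles.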
2.5 Need for a Common Conceptual Basis

A number of theories and formal execution models have been developed for interactive systems, such as temporal logics, process algebras, labeled transition systems, Petri nets, statecharts, and input-output automata. Unfortunately, these are often incompatible with each other, and the same terms may be used with confusingly different meanings. The lack of a uniform conceptual basis makes it difficult for software engineers to utilize them effectively.

Claim 10 Until a common conceptual basis and compatible terminology can be agreed on, theories of interaction will not have the impact on practice that they deserve.

3 Mastering the Complexity of Interaction

Conventional methods of software design are implicitly based on the algorithmic paradigm. Trying to cope with interaction has led to amendments and additional concepts, but the fundamental basis has remained the same. As is typical in connection with paradigm changes, this has extended the life of the old paradigm, but at the cost of a complex and heterogeneous set of concepts. Ultimately, such attempts seem to be doomed to failure. In this section we will outline how the new paradigm should be reflected in design methods, in order to render the potential complexity of interactive systems manageable.

3.1 Avoiding Complexity

The theoretical complexity discussed by Wegner makes it unreasonable to try to prove properties of arbitrary interactive systems. Our interpretation of this for software engineering is that informal methods are insufficient for them, and formal methods are of little help if one tries to use them a posteriori, or as an add-on to an informal software process.

Claim 11 A theory of interaction needs to be integrated into practical specification and design methods.

Fortunately, instead of arbitrary interactive systems we only need to deal with those that we have constructed.
According to Dijkstra [4], “Because we are dealing with artifacts, all unmastered complexity is of our own making (...), so we had better learn how not to introduce such complexity in the first place.” Structured programming and the design methods inspired by it [3] have shown how to do this for algorithmic computing. If structuring and theoretically justified design methods are useful for algorithmic software, they are a necessity for the more complex domain of interactive computing.

It might be thought that decomposing an interactive system into components would provide the necessary structuring. This is not, however, true. As shown by Lamport [14], decomposition into open components does not make proofs any easier. On the contrary, it often makes them harder.

Claim 12 Components are not the proper units for structuring the specification of an interactive system.

Since no nontrivial specification can be understood in one piece, logical structuring is needed in its construction. In principle, this structuring is independent of decomposition, although this is obscured by conventional design methods.

Claim 13 Decomposition into components is orthogonal to logical structuring.

Decomposition is useful in dividing a design task into smaller subtasks, but one cannot really decompose anything before knowing what to decompose. While clues for eventual decomposition are often visible already in early stages of specification and design, actual decomposition is useful only when a rigorous specification is available.

Claim 14 Collective behaviors should be specified before decomposition.

A key to mastering complexity is not decomposition but simplification by abstraction. It is well known that in reverse engineering and re-engineering of existing systems it is crucial to recognize the implicit abstractions that have been implemented. Unfortunately, if the design was not explicitly based on appropriate abstractions, there is no guarantee that the system implements any.
Claim 15 To guarantee that an interactive system implements reasonable abstractions, these have to be made explicit within the specification and design process.

In the next subsection we will analyze what such abstractions can be. Abstractions can be utilized by refinement methods, where systems are first specified or designed at a high level of abstraction, and the correctness of these abstract views is preserved throughout the design process. For instance, when asynchronous communication is used to implement synchronous communication abstractions, the system can first be specified at this higher level of abstraction. If the crucial system properties are shown to hold at this level, and the refinement method guarantees their preservation, unmastered complexity is not introduced by the use of asynchronous communication [10]. Doing this requires, however, that theory and practice be better integrated than they are in today’s state of the art:

Claim 16 Practical specification and design methods have to support rigorous refinement of abstract models.

Although we consider support for rigorous refinements to be essential, we do not claim that everything should be done formally. In fact, we feel that the focus is often wrong in discussions of formal methods, when the importance of formalization is emphasized. Formalization of programming concepts, for instance, preserves their complexity, while proper abstractions contribute towards simplicity, even when used without formal rigor.

3.2 Abstractions of Interactive Systems

As outlined above, a well-mastered specification and design process needs to proceed through abstractions. This leads to the following:

Claim 17 Specifications should be structured with abstractions of the final system as units of modularity.

Instead of an implementation-oriented component architecture, this leads to logical structuring of specifications. To see in more detail what we mean by abstractions, let us consider behaviors.
An abstraction of a temporal behavior is another temporal behavior that is a correct but less detailed “view” of the former. For state-based behaviors, an abstraction need not contain all the variables, for instance. Also, “concrete” variables may be represented in it by “abstract” variables that are less suited for direct implementation. Similarly, events and components can be abstracted away from event-based behaviors. For a closed-system model of an interactive system, its abstractions are also closed systems. For each behavior of such a (less abstract) model, the more abstract models need to generate an abstraction of it. More specifically, a more abstract model may be a “projection,” where some parts of the system have been abstracted away; it may be less deterministic, with some constraints abstracted away; and it may have data in more abstract representations.

For effective use of abstractions, the specification and design process can proceed in steps where more detail is incrementally added, and the level of abstraction is gradually lowered.

Claim 18 Incremental specification/design should proceed by refining and synthesizing layers, which are closed-system abstractions of the final closed system.

Since object-oriented patterns can be understood as layers of this kind [17], some need for layered structuring has already been recognized in practice, although their use is not effectively supported by the design methods that are commonly used. Most formal methods also fail to support them. The idea of such layers is not new. Similar notions have been used, for instance, in program slicing [21] and in projections suggested for program verification [12]. However, the process here proceeds in the opposite direction, i.e., from high-level abstractions towards models that are more implementation-oriented.
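As a small hypothetical illustration (not from the paper) of lowering the level of abstraction, the synchronous-communication example of Section 3.1 can be sketched in code: a synchronous hand-off is first stated directly, and is then refined into an implementation on top of asynchronous queues. The abstract view – the sender does not proceed until the receiver has taken the value – is preserved by the acknowledgement, so no unmastered complexity is introduced.

```python
import queue
import threading

# Abstract layer: a synchronous hand-off, specified directly.
# Sender and receiver take one joint step.
def sync_handoff(value, consume):
    consume(value)

# Refined layer: the same abstraction implemented with an
# asynchronous data queue plus an acknowledgement channel.
class SyncOverAsync:
    def __init__(self):
        self.data = queue.Queue()
        self.ack = queue.Queue()

    def send(self, value):
        self.data.put(value)   # asynchronous send ...
        self.ack.get()         # ... but wait for the acknowledgement,
                               # which restores the synchronous view

    def receive(self):
        value = self.data.get()
        self.ack.put(None)     # releases the blocked sender
        return value

ch = SyncOverAsync()
received = []
t = threading.Thread(target=lambda: received.append(ch.receive()))
t.start()
ch.send(42)   # returns only after the receiver has taken the value
t.join()
print(received)  # -> [42]
```

The refinement replaces one atomic joint step by several smaller steps, but any behavior of the refined layer still projects to a behavior of the abstract layer.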
3.3 Effective Use of Abstractions

Effective use of abstractions of interactive systems requires a formal framework in which closed systems can be refined and synthesized. In event-based systems, when a component is considered in isolation from its environment, it is a standalone system, as was discussed above. Therefore, such a component can also be understood as an abstraction of the composed system. The constraint-oriented specification style proposed in connection with LOTOS [2] utilizes this fact, and can be understood as a restricted form of layering, where component structure more or less coincides with logical modularity.

In state-based approaches, temporal logics are frequently used for property-oriented specification. The Temporal Logic of Actions (TLA) [13] is one of their varieties, and its canonical formulas are also suited for operational modeling of closed systems. Refinement and synthesis of specification layers correspond in it to logical implication and conjunction, respectively, which makes it an intuitively natural formal basis for rigorous derivation of specifications. For an outline of a proposed TLA-based design method the reader is referred to [9].

In incremental specification of behaviors, divide and conquer requires the possibility to introduce solutions to different design problems in independent refinements of the same layer, and to synthesize the resulting layers in the end. Such a layered structure is also useful for software maintenance, where it supports aspect-oriented management of software evolution [18]. As far as safety properties are concerned, there are no essential problems in the synthesis of specification layers. On the other hand, preservation of liveness properties is impossible in the general case, since different refinements may introduce conflicting liveness properties. Such problems do not, however, arise when the different refinements address independent aspects of the system.
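The TLA correspondence mentioned above can be written out schematically (a sketch only; the concrete formulas depend on the specification at hand):

```latex
% Canonical form of a TLA specification of a closed system:
% initial condition, next-state action over the state variables x,
% and a fairness (liveness) assumption.
S \;\triangleq\; \mathit{Init} \,\land\, \Box[\mathit{Next}]_{x} \,\land\, F

% Refinement of a layer is logical implication:
% the refined (lower-level) layer implies the more abstract one.
S_{\mathit{low}} \;\Rightarrow\; S_{\mathit{high}}

% Synthesis of independently refined layers is conjunction.
S \;\triangleq\; S_{1} \,\land\, S_{2}
```

With this reading, the remark above about liveness becomes visible in the formulas: the conjunction S₁ ∧ S₂ is unproblematic for safety, but the fairness parts of the two layers may be jointly unsatisfiable unless the refinements address independent aspects.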
4 Components and Object-Orientation

We have stated above that logical structuring is needed in specifications instead of component architecture, and that conventional object-orientation requires too early decisions on component interfaces. In this section we analyze the role that components and object-orientation still have.

4.1 Components in Closed-System Models

As discussed above, one of the main advantages of closed-system modeling is that interactive behaviors can be specified at a level of abstraction where detailed interfaces between components have not yet been designed. Even though components should not, in general, determine the logical modularity of a specification, the internal structure of a closed-system layer may preindicate an eventual component structure. It then also defines interfaces between components, although possibly at a level of abstraction that is not suited for direct implementation. When a refinement of a closed-system model refines actions or data representations that are associated with component interfaces, these interfaces are effectively refined. Therefore, closed-system specifications are eminently suitable for interface refinement.

If desired, a closed system can be “opened” into a parallel composition of open systems [1], which is still the dominant paradigm in thinking about interaction. At the level of specification there is, however, no advantage in doing this. Instead, the same closed-system model can simply be understood as a specification of each of its components, with the rest of the specification serving as an environment model for the component under consideration [16]. With this view, division of labor may lead to a need to refine each of the components in a closed-system layer independently. If these refinements do not modify any component interfaces, there are no difficulties in their synthesis. It is, however, often desirable for a component refinement also to refine its interface.
In particular, experiments with design methods for layered specifications have shown the need for a refined component to temporarily refuse an external action. In event-based models, where actions are shared by the components, this is a natural idea, while in state-based models it has usually been overlooked.

4.2 Object-Orientation in Specifications

Modeling and reuse are two very different motivations for object-oriented programming [15]. From the viewpoint of modeling, the conventional programming-level abstractions are not, however, helpful for mastering the complexity of interactive systems. Objects could therefore be taken as just another popular fad, as suggested in [14]. Still, the notions of objects, classes and inheritance seem to reflect the way we organize our understanding of the world. Instead of taking objects as units of modularity, they could therefore be used in a specification layer to preindicate eventual component structure. The genericity that is usually present in classes can then be elevated to the level of layers:

Claim 19 Layers with object-oriented classes can be generic patterns with undefined numbers of objects.

Obviously, such genericity is needed in the formalization of object-oriented patterns [17]. For effective use of generic layers it is important that their properties hold for any concrete instantiations.

Two modifications are important in adapting object-oriented concepts to the needs of reasoning about interactive behaviors. Firstly, premature definition of interfaces has to be prevented. This can be achieved by replacing single-object methods with cooperative actions that may involve several participants. The need for such a generalization has also been independently recognized in practical information systems modeling [7] and in standards work on object models [22].
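Returning to the tango example of Section 2.1, a cooperative action can be sketched as follows (a hypothetical illustration, with all names invented here): the action `tango` has two participants, is enabled only when both participation guards hold, and changes the state of both objects in one atomic step. The subclass strengthens its inherited participation guard only, which keeps it a strict specialization.

```python
class Dancer:
    """Base class: objects contribute capabilities to participate
    in cooperative actions, not encapsulated single-object methods."""
    def __init__(self, name):
        self.name = name
        self.dancing = False

    def ready(self):                  # participation guard
        return not self.dancing

def tango(leader, follower):
    """A cooperative atomic action with two participants: enabled
    only when both guards hold, then executed as one joint step."""
    if leader.ready() and follower.ready():
        leader.dancing = follower.dancing = True
        return True
    return False

class PoliteDancer(Dancer):
    """Strict specialization: the inherited capability is only
    strengthened by an extra guard, so every PoliteDancer still
    satisfies the (safety) properties specified for Dancer."""
    def __init__(self, name):
        super().__init__(name)
        self.invited = False

    def ready(self):
        return super().ready() and self.invited

a, b = Dancer("a"), PoliteDancer("b")
print(tango(a, b))   # -> False: b's strengthened guard is not enabled
b.invited = True
print(tango(a, b))   # -> True
```

Strengthening a participation guard can only remove behaviors, so the safety properties of the base class are preserved; as noted in Section 3.3, liveness properties must still be reconsidered separately.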
However, effective use of inheritance then requires the following generalization:

Claim 20 In dealing with cooperative behaviors, inheritance of single-object methods needs to be generalized into inheritance of capabilities to participate in cooperative atomic actions.

Secondly, allowing arbitrary modifications to inherited actions may be desirable for reuse at the level of implementations. From the viewpoints of modeling and reasoning, however, subclasses and inheritance should be restricted to strict specialization. Fortunately, this is possible with an intuitively natural, strong interpretation of the “is-a” relationship between subclasses and base classes:

Claim 21 Subclasses can be used in specifications so that every object of a subclass satisfies all properties specified for its base class(es).

5 Concluding Remarks

With a history of only fifty years, software has become a universally applicable basic technique in the construction of complex systems. In contrast to more traditional areas of engineering, software engineering is largely considered an art in which theoretical formalizations of the entities it deals with – software – are irrelevant for practice. In this situation, thinking in software engineering is guided by languages, tools, and informal design methods. A “typical” software engineer couldn’t care less about a theoretical paradigm shift, as long as practical tools are available. Still, theoretical foundations have become essential in formal methods, but these are not yet in common use, and software engineers would like to treat them, too, as just additional tools with no effect on their thinking. Starting from Wegner’s observations on the paradigm shift from algorithms to interaction and its effect on the complexity of reasoning, we conclude that this situation is bound to change.
We argue, however, that the conceptual basis that is currently used for interactive systems is essentially algorithmic and much too complicated to support such a change. While appropriate fundamental abstractions have not been recognized, complicated mechanisms have been added on top of algorithmic concepts. Believing in the gospel according to St. Edsger, “all through history, simplifications have had a much greater long-range scientific impact than individual feats of ingenuity” [4], we outline in this paper how the power of interaction can be harnessed by simplicity. This requires, however, that theoretical understanding of interactive systems is properly integrated into design methods. Such integration will also affect the abstractions in terms of which software engineers need to think about computing and the specification and design process.